
Working Cycle Metric #87

Merged: 11 commits merged into dev, May 13, 2024

Conversation

@ccrock4t (Collaborator) commented Jan 11, 2024

This one adds time metrics for the working cycle, to solve #58

I will also try to determine the time complexity of the current code, since we want to ensure a small upper bound.

@ccrock4t ccrock4t linked an issue Jan 11, 2024 that may be closed by this pull request
@ccrock4t ccrock4t marked this pull request as draft January 11, 2024 20:37
@ccrock4t (Collaborator, author) commented Jan 11, 2024

[image: Top-level function]

To verify: consider() [VERIFIED]
To verify: observe() [VERIFIED]

@ccrock4t ccrock4t self-assigned this Jan 16, 2024
@ccrock4t (Collaborator, author) commented Jan 18, 2024

[image: consider() function]

@ccrock4t (Collaborator, author) commented Jan 18, 2024

[image: observe() function]

@ccrock4t (Collaborator, author) commented Jan 18, 2024

[image: memory.accept()]

@ccrock4t (Collaborator, author) commented Jan 18, 2024

After a rough analysis, the code appears to run within a small upper bound on time.

@ccrock4t (Collaborator, author)

Cycle metrics are now computed and stored in the Reasoner object.

They should probably be added to the GUI in #82

@ccrock4t ccrock4t marked this pull request as ready for review January 18, 2024 22:00
@ccrock4t (Collaborator, author)

The metric can be displayed in ConsolePlus using a new command /cycles

@maxeeem (Collaborator) commented Jan 23, 2024

@ccrock4t is this PR ready, or do you still plan on pushing more changes?

@ccrock4t (Collaborator, author)

It's ready; I have just found some issues while self-reviewing.

github-actions bot commented Jan 31, 2024

dev.txt - DEV BRANCH: FAILED (failures=54, errors=13)
pr.txt - PR BRANCH: FAILED (failures=58, errors=13)

--- dev.txt	2024-05-03 23:47:41.441223917 +0000
+++ pr.txt	2024-05-03 23:47:41.441223917 +0000
@@ -27,7 +27,7 @@
 test_backward_inference (test_NAL2.TEST_NAL2)
 'Backward inference ... ok
 test_comparison (test_NAL2.TEST_NAL2)
-'Comparison ... ok
+'Comparison ... FAIL
 test_conversions_between_inheritance_and_similarity (test_NAL2.TEST_NAL2)
 'Conversions between inheritance and similarity ... ok
 test_conversions_between_inheritance_and_similarity_0 (test_NAL2.TEST_NAL2)
@@ -138,9 +138,9 @@
 test_conditional_analogy (test_NAL5.TEST_NAL5)
 'Detachment ... ok
 test_conditional_deduction_0 (test_NAL5.TEST_NAL5)
-'Detachment ... ok
+'Detachment ... FAIL
 test_conditional_deduction_compound_eliminate_0 (test_NAL5.TEST_NAL5)
-'conditional deduction ... ok
+'conditional deduction ... FAIL
 test_conditional_deduction_compound_eliminate_1 (test_NAL5.TEST_NAL5)
 'conditional deduction ... ok
 test_conditional_deduction_compound_replace_0 (test_NAL5.TEST_NAL5)
@@ -226,7 +226,7 @@
 test_multiple_variables_introduction_0 (test_NAL6.TEST_NAL6)
 'Multiple variables introduction ... FAIL
 test_multiple_variables_introduction_1 (test_NAL6.TEST_NAL6)
-'Multiple variables introduction ... ok
+'Multiple variables introduction ... FAIL
 test_nlp1 (test_NAL6.TEST_NAL6)
 <(\,REPRESENT,_,CAT) --> cat>. %1.00;0.90% ... ok
 test_nlp2 (test_NAL6.TEST_NAL6)
@@ -260,7 +260,7 @@
 test_unification_4 (test_NAL6.TEST_NAL6)
 'Variable unification ... ok
 test_unification_5 (test_NAL6.TEST_NAL6)
-'Variable unification ... ok
+'Variable unification ... ERROR
 test_unification_5_1 (test_NAL6.TEST_NAL6)
 'Variable unification ... ok
 test_unification_6 (test_NAL6.TEST_NAL6)
@@ -353,7 +353,7 @@
 test_1_13_var (test_NAL8.TEST_NAL8)
 nal8.1.13.nal ... ERROR
 test_1_14 (test_NAL8.TEST_NAL8)
-nal8.1.14.nal ... FAIL
+nal8.1.14.nal ... ok
 test_1_16 (test_NAL8.TEST_NAL8)
 nal8.1.16.nal ... FAIL
 test_1_2 (test_NAL8.TEST_NAL8)
@@ -361,7 +361,7 @@
 test_1_3 (test_NAL8.TEST_NAL8)
 nal8.1.3.nal ... ok
 test_1_3_var (test_NAL8.TEST_NAL8)
-nal8.1.3.nal ... ERROR
+nal8.1.3.nal ... FAIL
 test_1_4 (test_NAL8.TEST_NAL8)
 nal8.1.4.nal ... ok
 test_1_4_var (test_NAL8.TEST_NAL8)

@maxeeem (Collaborator) commented Jan 31, 2024

@ccrock4t I think the calculations aren't quite correct. I keep getting Cycles per second is 1. every time. Should we be keeping a running average, similarly to what I did with the inference metrics? https://github.com/bowen-xu/PyNARS/pull/17#issuecomment-1910650589

@ccrock4t (Collaborator, author)

> @ccrock4t I think the calculations aren't quite correct. I keep getting Cycles per second is 1. every time. Should we be keeping a running average, similarly to what I did with the inference metrics? #17 (comment)

You're right, I'm sorry, this was not the behavior I got earlier. I will look into it. I also agree that a running average will be nice

@ccrock4t (Collaborator, author)

I see: it's because a minimum of 1 second has to elapse before a metric is recorded, so some form of averaging or extrapolation will be necessary.
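Since individual cycles are sub-second, one way to report a fractional cycles-per-second figure without waiting a full second is to invert a running-average cycle duration. A minimal sketch of that idea (class and method names here are hypothetical, not the PyNARS API):

```python
class CycleTimer:
    """Hypothetical sketch: derive cycles/sec from per-cycle durations
    instead of counting whole cycles per elapsed wall-clock second."""

    def __init__(self):
        self.cycles_count = 0
        self.avg_cycle_duration = 0.0  # running mean, in seconds

    def record(self, duration: float) -> None:
        """Fold one cycle's measured duration into the running mean."""
        self.cycles_count += 1
        self.avg_cycle_duration += (
            duration - self.avg_cycle_duration
        ) / self.cycles_count

    def cycles_per_second(self) -> float:
        """Extrapolate: sub-second cycles no longer round down to 1/sec."""
        if self.avg_cycle_duration == 0.0:
            return 0.0
        return 1.0 / self.avg_cycle_duration
```

With 10 ms cycles this reports roughly 100 cycles/sec immediately, rather than waiting for a full second to elapse and reporting 1.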

@ccrock4t ccrock4t marked this pull request as draft April 4, 2024 19:18
@ccrock4t ccrock4t marked this pull request as ready for review May 2, 2024 02:32
@ccrock4t ccrock4t requested a review from maxeeem May 2, 2024 02:32
pynars/Config.py Outdated
@@ -84,6 +84,9 @@ class Config:

maximum_evidental_base_length = 20000

# cycle metrics
cycle_durations_window_length = 100
Review comment (Collaborator):
Is there a reason to keep a fixed window instead of a continuously calculated average?

# calculate average
self.cycles_count += 1
self.avg_cycle_duration += (self.last_cycle_duration - self.avg_cycle_duration) / self.cycles_count
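For contrast with the running mean above, the fixed-window approach behind cycle_durations_window_length could be sketched with a bounded deque, which caps memory and averages only the most recent cycles (a sketch with assumed names, not the PR's actual code):

```python
from collections import deque

class WindowedCycleAverage:
    """Sketch: average over only the last N cycle durations."""

    def __init__(self, window_length: int = 100):
        # a deque with maxlen discards the oldest entry automatically
        self.durations = deque(maxlen=window_length)

    def record(self, duration: float) -> None:
        self.durations.append(duration)

    def average(self) -> float:
        if not self.durations:
            return 0.0
        return sum(self.durations) / len(self.durations)
```

The trade-off: the running mean weighs all history equally in O(1) memory, while the window adapts faster to speed changes at the cost of O(N) memory.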

Reply (Collaborator, author):

My thinking was that the variables could overflow if we don't set an upper limit on the sum / cycles count; for example, in this code cycles_count is never capped. But I guess in practice NARS would never run for enough cycles to overflow, and it seems Python is also special in that its integers extend indefinitely, so they never overflow.
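A quick demonstration of that last point, using only the standard library:

```python
import sys

# Python 3 ints are arbitrary-precision: past the machine word size they
# keep growing instead of wrapping around, so a cycle counter cannot overflow.
big = sys.maxsize + 1      # one past the largest word-sized value
print(big > sys.maxsize)   # True: no wraparound to a negative number
print(2 ** 100)            # 1267650600228229401496703205376
```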

Reply (Collaborator):

> Python is also special in that it will infinitely extend integers so they never overflow

Wow, really? That's cool. Do you have a link? Just for my own education ;)

@ccrock4t ccrock4t merged commit c8b8604 into dev May 13, 2024
1 check passed
@ccrock4t ccrock4t deleted the working_cycle_time_metric branch May 13, 2024 20:45
Successfully merging this pull request may close these issues:

[Technical Report] [Metrics] Working Cycle Upper Bound Metric